Theory-based Bayesian models of inductive reasoning
Abstract
Philosophers since Hume have struggled with the logical problem of induction, but children solve an even more difficult task: the practical problem of induction. Children somehow manage to learn concepts, categories, and word meanings, all on the basis of a set of examples that seems hopelessly inadequate. The practical problem of induction does not disappear with adolescence: adults face it every day whenever they attempt to predict an uncertain outcome. Inductive inference is a fundamental part of everyday life and, for cognitive scientists, a fundamental phenomenon of human learning and reasoning in need of computational explanation.

There are at least two important kinds of questions that we can ask about human inductive capacities. First, what is the knowledge on which a given instance of induction is based? Second, how does that knowledge support generalization beyond the specific data observed: how do we judge the strength of an inductive argument from a given set of premises to new cases, or infer which new entities fall under a concept given a set of examples? We provide a computational approach to answering these questions. Experimental psychologists have studied both the process of induction and the nature of prior knowledge representations in depth, but previous computational models of induction have tended to emphasize process to the exclusion of knowledge representation. The approach we describe here attempts to redress this imbalance by showing how domain-specific prior knowledge can be formalized as a crucial ingredient in a domain-general framework for rational statistical inference.

The value of prior knowledge has been attested by both psychologists and machine learning theorists, but with somewhat different emphases. Formal analyses in machine learning show that meaningful generalization is not possible unless a learner begins with some sort of inductive bias: some set of constraints on the space of hypotheses that will be considered (Mitchell, 1997). However, the best-known statistical machine-learning algorithms adopt relatively weak inductive biases and thus require much more data for successful generalization than humans do: tens or hundreds of positive and negative examples, in contrast to the human ability to generalize from just one or a few positive examples. These machine algorithms lack ways to represent and exploit the rich forms of prior knowledge that guide people's inductive inferences, and that have been the focus of much attention in cognitive and developmental psychology under the name of "intuitive theories" (Murphy and Medin, 1985). Murphy (1993) characterizes an intuitive theory as "a set of causal relations that collectively generate or explain the phenomena in a domain." We think of a theory more generally as any system of abstract principles that generates hypotheses for inductive inference in a domain, such as hypotheses about the meanings of new concepts, the conditions for new rules, or the extensions of new properties in that domain. Carey (1985), Wellman and Gelman (1992), and Gopnik and Meltzoff (1997) emphasize the central role of intuitive theories in cognitive development, both as sources of constraint on children's inductive reasoning and as the locus of deep conceptual change. Only recently have psychologists begun to consider seriously the roles that these intuitive theories might play in formal models of inductive inference (Gopnik and Schulz, 2004; Tenenbaum, …
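To make this concrete, here is a minimal sketch of Bayesian generalization from sparse positive examples in the spirit of the framework described above. The toy number-concept domain, the five candidate hypotheses, and the uniform prior are all invented for illustration; only the general recipe (a theory-generated hypothesis space, Bayes' rule, and a size-principle likelihood) reflects the approach the abstract describes.

```python
DOMAIN = range(1, 101)

# Toy hypothesis space: each hypothesis is a candidate extension of an
# unknown number concept, as might be generated by a simple intuitive
# "theory" of numbers (illustrative only).
hypotheses = {
    "even":            {n for n in DOMAIN if n % 2 == 0},
    "odd":             {n for n in DOMAIN if n % 2 == 1},
    "squares":         {n for n in DOMAIN if round(n ** 0.5) ** 2 == n},
    "powers of 2":     {n for n in DOMAIN if n & (n - 1) == 0},
    "multiples of 10": {n for n in DOMAIN if n % 10 == 0},
}

def posterior(examples):
    """P(h | examples) by Bayes' rule, with a uniform prior and the size
    principle: examples are assumed sampled uniformly from the concept,
    so the likelihood is (1/|h|)^n for each h that contains every
    example, and 0 otherwise."""
    scores = {}
    for name, extension in hypotheses.items():
        if all(x in extension for x in examples):
            scores[name] = (1 / len(extension)) ** len(examples)
        else:
            scores[name] = 0.0
    total = sum(scores.values())
    return {name: s / total for name, s in scores.items()}

# Three sparse positive examples already pick out the tight hypothesis:
for name, p in sorted(posterior([16, 4, 64]).items(), key=lambda kv: -kv[1]):
    print(f"{name:15s} {p:.3f}")
```

With the examples 16, 4, and 64, the hypotheses "even", "squares", and "powers of 2" are all logically consistent, yet the size principle concentrates roughly three quarters of the posterior on "powers of 2": broader hypotheses are penalized for failing to explain why every example falls in such a small subset.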
Related papers
Combining causal and similarity-based reasoning
Everyday inductive reasoning draws on many kinds of knowledge, including knowledge about relationships between properties and knowledge about relationships between objects. Previous accounts of inductive reasoning generally focus on just one kind of knowledge: models of causal reasoning often focus on relationships between properties, and models of similarity-based reasoning often focus on simi...
Theory-based Bayesian models of inductive learning and reasoning.
Inductive inference allows humans to make powerful generalizations from sparse data when learning about word meanings, unobserved properties, causal relationships, and many other aspects of the world. Traditional accounts of induction emphasize either the power of statistical learning, or the importance of strong constraints from structured domain knowledge, intuitive theories or schemas. We ar...
Theory-based Bayesian models of inductive reasoning
principles of an intuitive theory are relevant in this context; these principles are analogous to the taxonomic and mutation principles underlying the evolutionary model in the last section. First, a structured representation captures the relevant relations between entities in the domain: in this context, we posit a set of directed predator-prey relations. An example of such a food web is shown...
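As a rough sketch of how such a structured representation can generate hypotheses, consider the toy food web below. Both the web and the deterministic up-the-food-chain transmission rule are invented to keep the example short; models in this literature typically use noisy, probabilistic transmission over empirically derived webs.

```python
from collections import deque

# Toy directed food web (invented): edges point from predator to prey.
food_web = {
    "shark":    ["tuna", "herring"],
    "tuna":     ["herring", "squid"],
    "herring":  ["plankton"],
    "squid":    ["plankton"],
    "plankton": [],
}

def eaters_of(prey):
    """Species that directly prey on `prey`."""
    return [p for p, diet in food_web.items() if prey in diet]

def disease_hypothesis(source):
    """One theory-generated hypothesis: a disease observed in `source`
    spreads to anything that eats it, directly or through intermediate
    species (deterministic transmission, for simplicity)."""
    infected, frontier = {source}, deque([source])
    while frontier:
        species = frontier.popleft()
        for predator in eaters_of(species):
            if predator not in infected:
                infected.add(predator)
                frontier.append(predator)
    return infected

print(disease_hypothesis("plankton"))  # whole chain: all five species
print(disease_hypothesis("tuna"))      # only {'tuna', 'shark'}
```

The same graph yields a different candidate extension for each possible source species, which is the sense in which a structured representation, combined with a causal transmission principle, generates a hypothesis space for property induction.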
Inductive reasoning.
Inductive reasoning entails using existing knowledge or observations to make predictions about novel cases. We review recent findings in research on category-based induction as well as theoretical models of these results, including similarity-based models, connectionist networks, an account based on relevance theory, Bayesian models, and other mathematical models. A number of touchstone empiric...
Dynamics of inductive inference in a unified framework
We present a model of inductive inference that includes, as special cases, Bayesian reasoning, case-based reasoning, and rule-based reasoning. This unified framework allows us to examine, positively or normatively, how the various modes of inductive inference can be combined and how their relative weights change endogenously. We establish conditions under which an agent who does not know the str...
Bayesian Models of Inductive Generalization
We argue that human inductive generalization is best explained in a Bayesian framework, rather than by traditional models based on similarity computations. We go beyond previous work on Bayesian concept learning by introducing an unsupervised method for constructing flexible hypothesis spaces, and we propose a version of the Bayesian Occam’s razor that trades off priors and likelihoods to preve...
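A minimal numerical sketch of the prior-likelihood tradeoff such an Occam's razor involves (the hypothesis sizes and the prior below are invented, not taken from the paper): a broad hypothesis starts with the higher prior, but under a size-principle likelihood every consistent example counts against it.

```python
def p_broad(prior_broad, size_broad, size_narrow, n):
    """Posterior probability of the broad hypothesis after n examples
    that are consistent with both hypotheses, under the size principle
    (likelihood (1/|h|)^n)."""
    broad = prior_broad * (1 / size_broad) ** n
    narrow = (1 - prior_broad) * (1 / size_narrow) ** n
    return broad / (broad + narrow)

# Broad concept covers 50 items, narrow covers 5; prior favors broad 9:1.
for n in range(5):
    print(n, round(p_broad(0.9, 50, 5, n), 3))
# n=0 -> 0.9    the prior alone favors the broad hypothesis
# n=1 -> 0.474  one example nearly cancels the prior advantage
# n=2 -> 0.083  two examples already favor the narrow hypothesis
```

Whether the prior or the likelihood wins is thus decided by the data, which is the kind of tradeoff the snippet above gestures at.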
Publication date: 2006